
    Topics (Disinformation)

    Get PDF
    The topic variable is used in research on disinformation to analyze thematic differences in the content of false news, rumors, conspiracies, etc. Those topics are frequently based on national news agendas, i.e., producers of disinformation address current national or world events (e.g., elections, immigration) (Humprecht, 2019).

    Field of application/theoretical foundation: Topics are a central yet under-researched aspect of research on online disinformation (Freelon & Wells, 2020). The research interest is to find out which topics are taken up and spread by disinformation producers. This research focuses both on specific key topics for which sub-themes are identified (e.g., elections, climate change, Covid-19) and, more generally, on the question of which misleading content is disseminated (mostly on social media). Methodologically, the identification of topics is often a first step that is followed by further analysis of the content (Ferrara, 2017). The analysis of topics is thus linked to the detection of disinformation, which represents a methodological challenge. Topics can be identified inductively or deductively. Inductive analyses often start from a data corpus, for example social media data, and identify topics using techniques such as topic modelling (e.g., Boberg et al., 2020; a minimal sketch of this step appears after the references at the end of this entry). Deductive analyses frequently use topic lists to classify content; such lists are initially created based on the literature on the respective topic or with the help of databases, e.g., those maintained by fact-checkers.

    References/combination with other methods of data collection: Studies on topics of disinformation use manual content analysis, automated content analysis, or combinations of both to investigate the occurrence of different topics in texts (Boberg et al., 2020; Bradshaw, Howard, Kollanyi, & Neudert, 2020). Inductive and deductive approaches have been combined with qualitative text analyses to identify topic categories which are subsequently coded (Humprecht, 2019; Marchal, Kollanyi, Neudert, & Howard, 2019).

    Example studies: Ferrara (2017); Humprecht (2019); Marchal et al. (2019)

    Table 1. Summary of selected studies

    Ferrara (2017)
    Content type: tweets
    Sampling period: April 27, 2017 to May 7, 2017
    Sample size: 16.65 million tweets
    Sampling: list of 23 keywords and top 20 hashtags
    Keywords: France2017, Marine2017, AuNomDuPeuple, FrenchElection, FrenchElections, Macron, LePen, MarineLePen, FrenchPresidentialElection, JeChoisisMarine, JeVoteMarine, JeVoteMacron, JeVote, Presidentielle2017, ElectionFracaise, JamaisMacron, Macron2017, EnMarche, MacronPresident
    Hashtags: #Macron, #Presidentielle2017, #fn, #JeVote, #LePen, #France, #2017LeDebat, #MacronLeaks, #Marine2017, #debat2017, #2017LeDébat, #MacronGate, #MarineLePen, #Whirlpool, #EnMarche, #JeVoteMacron, #MacronPresident, #JamaisMacron, #FrenchElection
    Reliability: —

    Humprecht (2019)
    Content type: fact checks
    Outlet/country: 2 fact-checkers per country (AT, DE, UK, US)
    Sampling period: June 1, 2016 to September 30, 2017
    Sample size: N = 651
    Unit of analysis: story/fact check
    No. of topics coded: main topic per fact check
    Level of analysis: fact checks and fact-checker
    Values: conspiracy theory, education, election campaign, environment, government/public administration (at the time when the story was published), health, immigration/integration, justice/crime, labor/employment, macroeconomics/economic regulation, media/journalism, science/technology, war/terror, others
    Reliability: Krippendorff's alpha = 0.71

    Marchal et al. (2019)
    Content type: tweets related to the European elections 2019
    Sampling: hashtags in English, Catalan, French, German, Italian, Polish, Spanish, Swedish
    Sampling criteria: tweets that (1) contained at least one of the relevant hashtags; (2) contained the hashtag in the URL shared or in the title of its webpage; (3) were a retweet of a message that contained a relevant hashtag or mention in the original message; or (4) were a quoted tweet referring to a tweet with a relevant hashtag or mention
    Sampling period: April 5 to April 20, 2019
    Sample size: 584,062 tweets from 187,743 unique users
    Values: Religion Islam (Muslim, Islam, Hijab, Halal, Muslima, Minaret); Religion Christianity (Christianity, Church, Priest); Immigration (Asylum Seeker, Refugee, Migrants, Child Migrant, Dual Citizenship, Social Integration); Terrorism (ISIS, Djihad, Terrorism, Terrorist Attack); Political Figures/Parties (Vladimir Putin, Enrico Mezzetti, Emmanuel Macron, ANPI, Arnold van Doorn, Islamic Party for Unity, Nordic Resistance Movement); Celebrities (Lara Trump, Alba Parietti); Crime (Vandalism, Rape, Sexual Assault, Fraud, Murder, Honour Killing); Notre-Dame Fire (Notre-Dame Fire, Reconstruction); Political Ideology (Anti-Fascism, Fascism, Nationalism); Social Issues (Abortion, Bullying, Birth Rate)
    Reliability: —

    References
    Boberg, S., Quandt, T., Schatto-Eckrodt, T., & Frischlich, L. (2020). Pandemic populism: Facebook pages of alternative news media and the corona crisis -- A computational content analysis. Retrieved from http://arxiv.org/abs/2004.02566
    Bradshaw, S., Howard, P. N., Kollanyi, B., & Neudert, L.-M. (2020). Sourcing and automation of political news and information over social media in the United States, 2016-2018. Political Communication, 37(2), 173–193. https://doi.org/10.1080/10584609.2019.1663322
    Ferrara, E. (2017). Disinformation and social bot operations in the run up to the 2017 French presidential election. First Monday, 22(8). https://doi.org/10.5210/FM.V22I8.8005
    Freelon, D., & Wells, C. (2020). Disinformation as political communication. Political Communication, 37(2), 145–156. https://doi.org/10.1080/10584609.2020.1723755
    Humprecht, E. (2019). Where 'fake news' flourishes: A comparison across four Western democracies. Information, Communication & Society, 22(13), 1973–1988. https://doi.org/10.1080/1369118X.2018.1474241
    Marchal, N., Kollanyi, B., Neudert, L.-M., & Howard, P. N. (2019). Junk news during the EU parliamentary elections: Lessons from a seven-language study of Twitter and Facebook. Oxford, UK. Retrieved from https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/05/EU-Data-Memo.pdf
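
    The inductive step described above can be illustrated with a small topic-modelling sketch. This is a minimal example assuming an in-memory corpus of posts and scikit-learn's LDA implementation; the corpus, vectorizer settings, and number of topics are invented for illustration and are not taken from the cited studies.

```python
# Minimal sketch of inductive topic identification: fit an LDA topic model
# to a (tiny, invented) corpus of social media posts and print the top
# terms per topic so a researcher can label the topics manually.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "macron wins debate ahead of french election",
    "rumor spreads about ballot fraud in marseille",
    "new climate report dismissed as hoax by bloggers",
    "vaccine misinformation shared in private groups",
]

vectorizer = CountVectorizer(stop_words="english", min_df=1)
doc_term = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```

    A deductive variant would instead match each post against a predefined topic list, e.g., keyword sets derived from the literature or from fact-checker databases.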

    Types (Disinformation)

    Get PDF
    Disinformation can appear in various forms. Firstly, different formats can be manipulated, such as texts, images, and videos. Secondly, the amount and degree of falseness can vary, from completely fabricated content to decontextualized information to satire that intentionally misleads recipients. The forms and formats of disinformation therefore vary and differ not only between the supposedly clear categories of "true" and "false".

    Field of application/theoretical foundation: Studies on types of disinformation are conducted in various fields, e.g., political communication, journalism studies, and media effects studies. Among other things, the studies identify the most common types of mis- or disinformation during certain events (Brennen, Simon, Howard, & Nielsen, 2020), analyze and categorize the behavior of different types of Twitter accounts (Linvill & Warren, 2020), and investigate the existence of several types of "junk news" in different national media landscapes (Bradshaw, Howard, Kollanyi, & Neudert, 2020; Neudert, Howard, & Kollanyi, 2019).

    References/combination with other methods of data collection: Only relatively few studies use combinations of methods. Some studies identify different types of disinformation via qualitative and quantitative content analyses (Bradshaw et al., 2020; Brennen et al., 2020; Linvill & Warren, 2020; Neudert et al., 2019). Others use surveys to analyze respondents' concerns about, and exposure to, different types of mis- and disinformation (Fletcher, 2018).

    Example studies: Brennen et al. (2020); Bradshaw et al. (2020); Linvill and Warren (2020)

    Information on example studies: Types of disinformation are defined by the presentation and contextualization of content and sometimes additionally by details about the communicator (e.g., professionalism). Studies either deductively identify different types of disinformation (Brennen et al., 2020) by applying the theoretical framework by Wardle (2019), or additionally inductively identify and build different categories based on content analyses (Bradshaw et al., 2020; Linvill & Warren, 2020).

    Table 1. Types of mis-/disinformation by Brennen et al. (2020)

    Satire or parody: —
    False connection: Headlines, visuals, or captions don't support the content.
    Misleading content: Misleading use of information to frame an issue or individual; facts/information are misrepresented or skewed.
    False context: Genuine content is shared with false contextual information, e.g., real images that have been taken out of context.
    Imposter content: Genuine sources, e.g., news outlets or government agencies, are impersonated.
    Fabricated content: Content is made up and 100% false; designed to deceive and do harm.
    Manipulated content: Genuine information or imagery is manipulated to deceive, e.g., deepfakes or other kinds of manipulation of audio and/or visuals.

    Note. The categories are adapted from the theoretical framework by Wardle (2019). The coding instruction was: "To the best of your ability, what type of misinformation is it? (Select one that fits best.)" (Brennen et al., 2020, p. 12). The coders reached an intercoder reliability of a Cohen's kappa of 0.82.

    Table 2. Criteria for the "junk news" label by Bradshaw et al. (2020)

    Professionalism (refers to the information about authors and the organization)
    Specification: "Sources do not employ the standards and best practices of professional journalism, including information about real authors, editors, and owners" (pp. 174-175). "Distinct from other forms of user-generated content and citizen journalism, junk news domains satisfy the professionalism criterion because they purposefully refrain from providing clear information about real authors, editors, publishers, and owners, and they do not publish corrections of debunked information" (p. 176).
    Procedure:
    - Systematically checked the about pages of domains: contact information, information about ownership and editors, and other information relating to professional standards
    - Reviewed whether the sources appeared in third-party fact-checking reports
    - Checked whether sources published corrections of fact-checked reporting
    Examples: zerohedge.com, conservativefighters.org, deepstatenation.news

    Counterfeit (refers to the layout and design of the domain itself)
    Specification: "(…) [S]ources mimic established news reporting by using certain fonts, having branding, and employing content strategies. (…) Junk news is stylistically disguised as professional news by the inclusion of references to news agencies and credible sources as well as headlines written in a news tone with date, time, and location stamps. In the most extreme cases, outlets will copy logos and counterfeit entire domains" (p. 176).
    Procedure:
    - Systematically reviewed organizational information about the owner and headquarters by checking sources like Wikipedia, the WHOIS database, and third-party fact-checkers (like Politico or MediaBiasFactCheck)
    - Consulted country-specific expert knowledge of the media landscape in the US to identify counterfeiting websites
    Examples: politicoinfo.com, NBC.com.co

    Style (refers to the content of the domain as a whole)
    Specification: "(…) [S]tyle is concerned with the literary devices and language used throughout news reporting. (…) Designed to systematically manipulate users for political purposes, junk news sources deploy propaganda techniques to persuade users at an emotional, rather than cognitive, level and employ techniques that include using emotionally driven language with emotive expressions and symbolism, ad hominem attacks, misleading headlines, exaggeration, excessive capitalization, unsafe generalizations, logical fallacies, moving images and lots of pictures or mobilizing memes, and innuendo (Bernays, 1928; Jowett & O'Donnell, 2012; Taylor, 2003). (…) Stylistically, problematic sources will employ propaganda and clickbait techniques to varying degrees. As a result, determining style can be highly complex and context dependent" (p. 177).
    Procedure:
    - Examined at least five stories on the front page of each news source in depth during the US presidential campaign in 2016 and the State of the Union address in 2018
    - Checked the headlines of the stories and the content of the articles for literary and visual propaganda devices
    - Considered a source stylistically problematic if three of the five stories systematically exhibited elements of propaganda
    Examples: 100percentfedup.com, barenakedislam.com, theconservativetribune.com, dangerandplay.com

    Credibility (refers to the content of the domain as a whole)
    Specification: "(…) [S]ources rely on false information or conspiracy theories and do not post corrections" (p. 175). "[They] typically report on unsubstantiated claims and rely on conspiratorial and dubious sources. (…) Junk news sources that satisfy the credibility criterion frequently fail to vet their sources, do not consult multiple sources, and do not fact-check" (p. 178).
    Procedure:
    - Examined at least five front-page stories and reviewed the sources that were cited
    - Reviewed pages to see if they included known conspiracy theories on issues such as climate change, vaccination, and "Pizzagate"
    - Checked third-party fact-checkers for evidence of debunked stories and conspiracy theories
    Examples: infowars.com, endingthefed.com, thegatewaypundit.com, newspunch.com

    Bias (refers to the content of the domain as a whole)
    Specification: "(…) [H]yper-partisan media websites and blogs (…) are highly biased, ideologically skewed, and publish opinion pieces as news. Basing their stories on the same events, these sources manage to convey strikingly different impressions of what actually transpired. It is such systematic differences in the mapping from facts to news reports that we call bias. (…) Bias exists on both sides of the political spectrum. Like determining style, determining bias can be highly complex and context dependent" (pp. 177-178).
    Procedure:
    - Checked third-party sources that systematically evaluate media bias
    - If the domain was not evaluated by a third party, examined the ideological leaning of the sources used to support stories appearing on the domain
    - Evaluated the labeling of politicians (are there differences between the left and the right?)
    - Identified bias created through the omission of unfavorable facts, or through writing that is falsely presented as being objective
    Examples on the right: breitbart.com, dailycaller.com, infowars.com, truthfeed.com
    Examples on the left: occupydemocrats.com, addictinginfo.com, bipartisanreport.com

    Note. The coders reached an intercoder reliability of a Krippendorff's alpha of 0.89. The "junk news" label applies to sources that fulfill at least three of the five criteria; it refers to sources that deliberately publish misleading, deceptive, or incorrect information packaged as real news (a minimal sketch of this decision rule follows the references at the end of this entry).

    Table 3. Identified types of IRA-associated Twitter accounts by Linvill and Warren (2020)

    Right troll: "Twitter-handles broadcast nativist and right-leaning populist messages. These handles' themes were distinct from mainstream Republicanism. (…) They rarely broadcast traditionally important Republican themes, such as taxes, abortion, and regulation, but often sent divisive messages about mainstream and moderate Republicans. (…) The overwhelming majority of handles, however, had limited identifying information, with profile pictures typically of attractive, young women" (p. 5). Hashtags frequently used by these accounts: #MAGA (i.e., "Make America Great Again"), #tcot (i.e., "Top Conservative on Twitter"), #AmericaFirst, and #IslamKills

    Left troll: "These handles sent socially liberal messages, with an overwhelming focus on cultural identity. (…) They discussed gender and sexual identity (e.g., #LGBTQ) and religious identity (e.g., #MuslimBan), but primarily focused on racial identity. Just as the Right Troll handles attacked mainstream Republican politicians, Left Troll handles attacked mainstream Democratic politicians, particularly Hillary Clinton. (…) It is worth noting that this account type also included a substantial portion of messages which had no clear political motivation" (p. 6). Hashtags frequently used by these accounts: #BlackLivesMatter, #PoliceBrutality, and #BlackSkinIsNotACrime

    Newsfeed: "These handles overwhelmingly presented themselves as U.S. local news aggregators and had descriptive names (…). These accounts linked to legitimate regional news sources and tweeted about issues of local interest (…). A small number of these handles (…) tweeted about global issues, often with a pro-Russia perspective" (p. 6). Hashtags frequently used by these accounts: #news, #sports, and #local

    Hashtag gamer: "These handles are dedicated almost entirely to playing hashtag games, a popular word game played on Twitter. Users add a hashtag to a tweet (e.g., #ThingsILearnedFromCartoons) and then answer the implied question. These handles also posted tweets that seemed organizational regarding these games (…). Like some tweets from Left Trolls, it is possible such tweets were employed as a form of camouflage, as a means of accruing followers, or both. Other tweets, however, often using the same hashtag as mundane tweets, were socially divisive (…)" (p. 7). Hashtags frequently used by these accounts: #ToDoListBeforeChristmas, #ThingsYouCantIgnore, #MustBeBanned, and #2016In4Words

    Fearmonger: "These accounts spread disinformation regarding fabricated crisis events, both in the U.S. and abroad. Such events included non-existent outbreaks of Ebola in Atlanta and Salmonella in New York, an explosion at the Columbian Chemicals plant in Louisiana, a phosphorus leak in Idaho, as well as nuclear plant accidents and war crimes perpetrated in Ukraine. (…) These accounts typically tweeted a great deal of innocent, often frivolous content (i.e., song lyrics or lines of poetry) which were potentially automated. With this content these accounts often added popular hashtags such as #love (…) and #rap (…). These accounts changed behavior sporadically to tweet disinformation, and that output was produced using a different Twitter client than the one used to produce the frivolous content. (…) The Fearmonger category was the only category where we observed some inconsistency in account activity. A small number of handles tweeted briefly in a manner consistent with the Right Troll category but switched to tweeting as a Fearmonger or vice-versa" (p. 7). Hashtags frequently used by these accounts: #Fukushima2015 and #ColumbianChemicals

    Note. The categories were identified by qualitatively analyzing the content produced and were then refined and explored in more detail via a quantitative analysis. The coders reached a Krippendorff's alpha intercoder reliability of 0.92.

    References
    Bradshaw, S., Howard, P. N., Kollanyi, B., & Neudert, L.-M. (2020). Sourcing and automation of political news and information over social media in the United States, 2016-2018. Political Communication, 37(2), 173–193.
    Brennen, J. S., Simon, F. M., Howard, P. N., & Nielsen, R. K. (2020). Types, sources, and claims of COVID-19 misinformation. Reuters Institute. Retrieved from http://www.primaonline.it/wp-content/uploads/2020/04/COVID-19_reuters.pdf
    Fletcher, R. (2018). Misinformation and disinformation unpacked. Reuters Institute. Retrieved from http://www.digitalnewsreport.org/survey/2018/misinformation-and-disinformation-unpacked/
    Linvill, D. L., & Warren, P. L. (2020). Troll factories: Manufacturing specialized disinformation on Twitter. Political Communication, 1–21.
    Neudert, L.-M., Howard, P., & Kollanyi, B. (2019). Sourcing and automation of political news and information during three European elections. Social Media + Society, 5(3). https://doi.org/10.1177/2056305119863147
    Wardle, C. (2019). First Draft's essential guide to understanding information disorder. UK: First Draft News. Retrieved from https://firstdraftnews.org/wp-content/uploads/2019/10/Information_Disorder_Digital_AW.pdf?x7670
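
    As a reading aid for Table 2, here is a minimal sketch of the "three of five criteria" decision rule. The data structure, domain name, and coded values are invented; in the cited study the criteria are coded manually by trained coders.

```python
# Minimal sketch of the decision rule described in Table 2: a source is
# labeled "junk news" when it satisfies at least three of the five
# criteria. Example values below are invented for illustration.
from dataclasses import dataclass

CRITERIA = ("professionalism", "counterfeit", "style", "credibility", "bias")

@dataclass
class SourceCoding:
    domain: str
    professionalism: bool  # lacks information about authors/editors/owners
    counterfeit: bool      # mimics the look of established news outlets
    style: bool            # propaganda and clickbait techniques
    credibility: bool      # unvetted sources, no fact-checking/corrections
    bias: bool             # hyper-partisan, opinion packaged as news

def is_junk_news(coding: SourceCoding, threshold: int = 3) -> bool:
    met = sum(getattr(coding, criterion) for criterion in CRITERIA)
    return met >= threshold

example = SourceCoding("example-news.test", True, False, True, True, False)
print(is_junk_news(example))  # True: three of the five criteria are met
```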

    Publishers/sources (Disinformation)

    Get PDF
    Recent research has mainly used two approaches to identify publishers or sources of disinformation: First, alternative media are identified as potential publishers of disinformation. Second, potential publishers of disinformation are identified via fact-checking websites. Samples created using those approaches can partly overlap. However, the two approaches differ in terms of validity and comprehensiveness of the identified population. Sampling of alternative media outlets is theory-driven and allows for cross-national comparison; however, researchers face the challenge of identifying misinforming content published by alternative media outlets. In contrast, fact-checked content facilitates the identification of a given disinformation population; however, fact-checkers often have a publication bias, focusing on a small range of (elite) actors or sources (e.g., individual blogs, hyper-partisan news outlets, or politicians). In both approaches it is important to describe, compare, and, if possible, assign the outlets to already existing categories in order to enable temporal and spatial comparison.

    Approaches to identify sources/publishers: Besides the operationalization of specific variables analyzed in the field of disinformation, the sampling procedure is a crucial element in operationalizing disinformation itself. Following the approach of detecting disinformation through its potential sources or publishers (Li, 2020), research analyzes alternative media (Bachl, 2018; Boberg, Quandt, Schatto-Eckrodt, & Frischlich, 2020; Heft et al., 2020) or identifies a broad range of actors or domains via fact-checking sites (Allcott & Gentzkow, 2017; Grinberg et al., 2019; Guess, Nyhan, & Reifler, 2018). Those two approaches are explained in the following.

    Alternative media as sources/publishers
    The following procedure summarizes the approaches used in current research for the identification of relevant alternative media outlets (following Bachl, 2018; Boberg et al., 2020; Heft et al., 2020). Snowball sampling to define the universe of alternative media outlets may consist of the following steps:
    - Sample of outlets identified in previous research
    - Consultation of search engines and news articles
    - Departing from a potential prototype, consultation of websites that provide digital metrics (Alexa.com or Similarweb.com). For example, Similarweb.com shows three relevant lists per outlet: "Top Referring Sites" (which websites send traffic to this site), "Also visited websites" (overlap with users of other websites), and "Competitors & Similar Sites" (similarity as defined by the company)

    Definition of alternative media outlets:
    - Journalistic outlets (excluding, for example, blogs and forums) with current, non-fictional, and regularly published content
    - Self-description of the outlet in an "about us" section or mission statement that underlines the relational perspective of being an alternative to the mainstream media. This description may, for example, include keywords such as alternative, independent, unbiased, or critical, or statements like "presenting the real/true views/facts" or "covering what the mainstream media hides/leaves out"
    - Use of predefined dimensions and categories of alternative media (Frischlich, Klapproth, & Brinkschulte, 2020; Holt, Ustad Figenschou, & Frischlich, 2019)

    Sources/publishers via fact-checking sites
    Following previous research in the U.S., Guess et al. (2018) identified "fake news domains" (focusing on pro-Trump and pro-Clinton content) which published two or more articles that were coded as "fake news" by fact-checkers (derived from Allcott & Gentzkow, 2017). Grinberg et al. (2019) identified three classes of "fake news sources" differentiated by the severity and frequency of false content (see Table 1). These three classes are part of a total of six website labels: the researchers additionally coded sites as reasonable journalism, low-quality journalism, satire, or not applicable. The coders reached a percentage agreement of 60% for the labeling of the six categories, and 80% for the distinction between fake and non-fake categories. A minimal sketch of the list-based identification step follows the references at the end of this entry.

    Table 1. Three classes of "fake news sources" by Grinberg et al. (2019)

    Black domains
    Specification: Based on previous studies: these domains published at least two articles that were declared "fake news" by fact-checking sites.
    Identification: Based on preexisting lists constructed by fact-checkers, journalists, and academics (Allcott & Gentzkow, 2017; Guess et al., 2018).
    Definition: Almost exclusively fabricated stories.

    Red domains
    Specification: Major or frequent falsehoods that are in line with the site's political agenda. Prejudiced: the site presents falsehoods that focus on one group with regard to race/religion/ethnicity/sexual orientation. Major or frequent falsehoods with little regard for the truth, but not necessarily to advance a certain political agenda.
    Identification: Flagged by the fact-checker snopes.com as sources of questionable claims; then manually differentiated between red and orange domains.
    Definition: Falsehoods that clearly reflected a flawed editorial process.

    Orange domains
    Specification: Moderate or occasional falsehoods to advance a political agenda. Sensationalism: exaggerations to the extent that the article becomes misleading and inaccurate. Occasionally prejudiced articles: the site at times presents individual articles that contain falsehoods regarding race/religion/ethnicity/sexual orientation. Self-admission: the site openly states that it may be inaccurate, fake news, or cannot be trusted to provide factual news. Moderate or frequent falsehoods with little regard for the truth, but not necessarily to advance a certain political agenda. Conspiratorial: explanations of events that involve unwarranted suspicion of government cover-ups or supernatural agents.
    Identification: Flagged by the fact-checker snopes.com as sources of questionable claims; then manually differentiated between red and orange domains.
    Definition: Negligent and deceptive information, but less systemically flawed.

    Supplementary materials: https://science.sciencemag.org/content/sci/suppl/2019/01/23/363.6425.374.DC1/aau2706_Grinberg_SM.pdf (S5 and S6)
    Coding scheme and source labels: https://zenodo.org/record/2651401#.XxGtJJgzaUl (LazerLab-twitter-fake-news-replication-2c941b8\domains\domain_coding\data)

    References
    Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236.
    Bachl, M. (2018). (Alternative) media sources in AfD-centered Facebook discussions. Studies in Communication | Media, 7(2), 256–270.
    Bakir, V., & McStay, A. (2018). Fake news and the economy of emotions. Digital Journalism, 6(2), 154–175.
    Boberg, S., Quandt, T., Schatto-Eckrodt, T., & Frischlich, L. (2020, April 6). Pandemic populism: Facebook pages of alternative news media and the corona crisis -- A computational content analysis. Retrieved from http://arxiv.org/pdf/2004.02566v3
    Farkas, J., Schou, J., & Neumayer, C. (2018). Cloaked Facebook pages: Exploring fake Islamist propaganda in social media. New Media & Society, 20(5), 1850–1867.
    Frischlich, L., Klapproth, J., & Brinkschulte, F. (2020). Between mainstream and alternative – Co-orientation in right-wing populist alternative news media. In C. Grimme, M. Preuss, F. W. Takes, & A. Waldherr (Eds.), Lecture Notes in Computer Science: Disinformation in open online media (Vol. 12021, pp. 150–167). Cham: Springer International Publishing.
    Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 U.S. presidential election. Science, 363(6425), 374–378.
    Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1). https://doi.org/10.1126/sciadv.aau4586
    Guess, A., Nyhan, B., & Reifler, J. (2018). Selective exposure to misinformation: Evidence from the consumption of fake news during the 2016 US presidential campaign. European Research Council, 9(3), 1–14.
    Heft, A., Mayerhöffer, E., Reinhardt, S., & Knüpfer, C. (2020). Beyond Breitbart: Comparing right-wing digital news infrastructures in six Western democracies. Policy & Internet, 12(1), 20–45.
    Holt, K., Ustad Figenschou, T., & Frischlich, L. (2019). Key dimensions of alternative news media. Digital Journalism, 7(7), 860–869.
    Nelson, J. L., & Taneja, H. (2018). The small, disloyal fake news audience: The role of audience availability in fake news consumption. New Media & Society, 20(10), 3720–3737.
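
    To make the list-based identification step concrete, here is a minimal sketch that derives "black domains" from fact-checked article URLs, following the two-or-more-articles rule described above (Allcott & Gentzkow, 2017; Guess et al., 2018). The URLs are invented; red and orange domains require manual review and are not modeled.

```python
# Minimal sketch: a domain is treated as a "fake news domain" (black
# domain) once fact-checkers have debunked at least two of its articles.
# The fact-check records below are invented for illustration.
from collections import Counter
from urllib.parse import urlparse

fact_checked_fake_articles = [
    "http://example-hoax.test/story-1",
    "http://example-hoax.test/story-2",
    "http://one-off-rumor.test/story-9",
]

def black_domains(urls, min_articles=2):
    # Count debunked articles per domain and keep domains over the threshold.
    counts = Counter(urlparse(url).netloc for url in urls)
    return sorted(d for d, n in counts.items() if n >= min_articles)

print(black_domains(fact_checked_fake_articles))
# ['example-hoax.test']
```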

    Identifying the Drivers Behind the Dissemination of Online Misinformation: A Study on Political Attitudes and Individual Characteristics in the Context of Engaging With Misinformation on Social Media

    Full text link
    The increasing dissemination of online misinformation in recent years has raised the question of which individuals interact with this kind of information and what role attitudinal congruence plays in this context. To answer these questions, we conduct surveys in six countries (BE, CH, DE, FR, UK, and US) and investigate the drivers of the dissemination of misinformation on three non-country-specific topics (immigration, climate change, and COVID-19). Our results show that besides issue attitudes and issue salience, political orientation, personality traits, and heavy social media use increase the willingness to disseminate misinformation online. We conclude that future research should not only consider individuals' beliefs but also focus on specific user groups that are particularly susceptible to misinformation and possibly caught in social media "fringe bubbles".
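
    The abstract implies a predictor-outcome design: willingness to disseminate misinformation regressed on attitudes, salience, orientation, traits, and usage. Below is a minimal sketch of such a logistic regression on simulated data; the variable names, coefficients, and data are invented and do not reproduce the study's model.

```python
# Minimal sketch of a predictor-outcome analysis of this kind: regress a
# binary "would share misinformation" outcome on invented predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
issue_attitude = rng.normal(size=n)    # congruence with the false claim
social_media_use = rng.normal(size=n)  # intensity of use
X = sm.add_constant(np.column_stack([issue_attitude, social_media_use]))

# Simulate a binary "would share" outcome that depends on both predictors.
logits = 0.8 * issue_attitude + 0.5 * social_media_use - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = sm.Logit(y, X).fit(disp=0)
print(model.params)  # constant, issue_attitude, social_media_use
```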

    Popularity on Facebook during election campaigns: an analysis of issues and emotions in parties’ online communication

    Full text link
    Successful communication strategies on social media are of great concern for parties’ election campaigns. Research increasingly focuses on identifying which factors promote popularity cues (e.g., Likes or Shares) as indicators of success. However, existing studies have neglected the role of issues in multiparty environments. Furthermore, it is still unclear whether positive or negative emotions are the stronger drivers of user engagement. We investigate parties’ emphasis on political issues and emotions as success factors in their election campaign communication on Facebook. We analyze the Facebook pages of the six largest parties in Germany and Austria before the respective national elections in 2017. We find that parties’ top issues, identity issues, and positive and negative emotions increase popularity cues. Yet these factors trigger different types of reactions: Whereas Shares are triggered by the use of top issues and positive emotions, Comments are evoked by identity issues and predominantly by negative emotions.

    Perceptions of disinformation, media coverage and government policy related to the Coronavirus – survey findings from six Western countries

    Full text link
    How do citizens evaluate the role of the press and governments during this unprecedented crisis, and how concerned are they about false information? In this report we discuss perceptions of governments’ handling of the crisis, perceptions of media coverage, and concern about disinformation across six Western democracies. Furthermore, we present current findings on the willingness of individuals to spread disinformation on social media concerning COVID-19. To investigate this, we conducted a large-scale survey with active social media users in Belgium, Germany, France, Switzerland, the United Kingdom, and the United States over a period of three weeks (April 16 to May 6, 2020; N = 7,014). During these three weeks, almost all six countries were still under strict country-specific containment measures (“lockdowns”) to prevent the rapid spread of the coronavirus (European Commission 2020; Matrajt & Leung 2020). In this report we present some first findings.

    Was steigert die Facebook-Resonanz? Eine Analyse der Likes, Shares und Comments im Schweizer Wahlkampf 2015

    Get PDF
    Social media have become an integral part of election campaigns. They offer political actors the opportunity to address their own messages directly to the electorate. Through viral distribution and a high response rate, content can also reach user groups outside the actors’ social media networks. This increases the probability for political actors to reach a potentially new electorate on social media. Social network platforms such as Facebook (FB) quantify this resonance via user reactions: the number of Likes, Shares, and Comments (total FB reactions) that a FB post receives. The present study deals with the question of which characteristics (format, timing, and content) FB posts must have in order to trigger a particularly large number of user reactions. A quantitative content analysis of 733 Facebook posts published by the seven largest parties represented in the Swiss Parliament during the three months before the 2015 election shows that the use of news factors and party-owned issues is particularly helpful in increasing FB reactions.
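
    Studies like this one typically model reaction counts (Likes, Shares, Comments) as a function of coded post characteristics. Below is a minimal sketch of such a count model on simulated data, using a Poisson regression; the feature names, coefficients, and data are invented and do not reproduce the study's analysis.

```python
# Minimal sketch of modeling post-level reaction counts from coded content
# features. All values below are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 733
news_factors = rng.integers(0, 2, size=n)  # post uses news factors (0/1)
party_issue = rng.integers(0, 2, size=n)   # post addresses a party-owned issue
X = sm.add_constant(np.column_stack([news_factors, party_issue]).astype(float))

# Simulate a count outcome (e.g., Likes) and fit a Poisson model.
mu = np.exp(1.5 + 0.6 * news_factors + 0.4 * party_issue)
likes = rng.poisson(mu)

model = sm.Poisson(likes, X).fit(disp=0)
print(model.params)  # constant, news_factors, party_issue
```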